56 research outputs found

    Distributed memory compiler methods for irregular problems: Data copy reuse and runtime partitioning

    Outlined here are two methods which we believe will play an important role in any distributed memory compiler able to handle sparse and unstructured problems. We describe how to link runtime partitioners to distributed memory compilers. In our scheme, programmers can implicitly specify how data and loop iterations are to be distributed between processors. This insulates users from having to deal explicitly with potentially complex algorithms that carry out work and data partitioning. We also describe a viable mechanism for tracking and reusing copies of off-processor data. In many programs, several loops access the same off-processor memory locations. As long as it can be verified that the values assigned to off-processor memory locations remain unmodified, we show that we can effectively reuse stored off-processor data. We present experimental data from a 3-D unstructured Euler solver run on an iPSC/860 to demonstrate the usefulness of our methods.
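    The copy-reuse mechanism described above can be sketched as a small cache of off-processor values that later loops consult, and that is invalidated when remote locations may have been modified. This is an illustrative model only; the names (`GhostCache`, `gather`, `invalidate`) are assumptions for this sketch, not the paper's interface.

```python
class GhostCache:
    """Caches copies of off-processor array elements between loops."""

    def __init__(self, fetch):
        self.fetch = fetch    # function: global index -> remote value
        self.copies = {}      # global index -> cached copy

    def gather(self, indices):
        """Return values for `indices`, fetching only uncached ones."""
        for i in indices:
            if i not in self.copies:       # cache miss: go off-processor
                self.copies[i] = self.fetch(i)
        return [self.copies[i] for i in indices]

    def invalidate(self, indices):
        """Drop copies whose remote values may have been written."""
        for i in indices:
            self.copies.pop(i, None)


# Toy "remote memory" and a fetch function that records each off-processor access.
remote = {10: 1.5, 11: 2.5, 12: 3.5}
fetches = []


def fetch(i):
    fetches.append(i)
    return remote[i]


cache = GhostCache(fetch)
a = cache.gather([10, 11])   # first loop: indices 10 and 11 fetched
b = cache.gather([10, 12])   # second loop: only index 12 fetched; 10 is reused
```

    As in the abstract, the saving comes from the second loop reusing the stored copy of index 10 instead of communicating again, which is valid only while the remote value is known to be unmodified.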

    Benchmarking the CM-5 for Image Processing Applications

    This paper presents benchmarking results for image processing algorithms on the Connection Machine model CM-5 and compares them with the results from the CM-2 and the Sun-4. Image processing algorithms with varying communication and computational requirements were implemented, tested and timed. The performance and the scalability of the CM-5 were analyzed and compared with that of the CM-2.

    Distributed Memory Compiler Methods for Irregular Problems -- Data Copy Reuse and Runtime Partitioning

    This paper outlines two methods which we believe will play an important role in any distributed memory compiler able to handle sparse and unstructured problems. We describe how to link runtime partitioners to distributed memory compilers. In our scheme, programmers can implicitly specify how data and loop iterations are to be distributed between processors. This insulates users from having to deal explicitly with potentially complex algorithms that carry out work and data partitioning. We also describe a viable mechanism for tracking and reusing copies of off-processor data. In many programs, several loops access the same off-processor memory locations. As long as it can be verified that the values assigned to off-processor memory locations remain unmodified, we show that we can effectively reuse stored off-processor data. We present experimental data from a 3-D unstructured Euler solver run on an iPSC/860 to demonstrate the usefulness of our methods.

    Supporting Irregular Distributions in FORTRAN 90D/HPF Compilers

    This paper presents methods that make it possible to efficiently support irregular problems using data parallel languages. The approach involves the use of a portable, compiler-independent, runtime support library called CHAOS. The CHAOS runtime support library contains procedures that (1) support static and dynamic distributed array partitioning, (2) partition loop iterations and indirection arrays, (3) remap arrays from one distribution to another, and (4) carry out index translation, buffer allocation and communication schedule generation. The CHAOS runtime procedures are used by a prototype Fortran 90D compiler as runtime support for irregular problems. This paper also presents performance results of compiler-generated and hand-parallelized versions of two stripped-down application codes. The first code is derived from an unstructured mesh computational fluid dynamics flow solver and the second is derived from the molecular dynamics code CHARMM. A method is described that makes it possible to emulate irregular distributions in HPF by reordering elements of data arrays and renumbering indirection arrays. The results suggest that an HPF compiler could use reordering and renumbering extrinsic functions to obtain performance comparable to that achieved by a compiler for a language (such as Fortran 90D) that directly supports irregular distributions.
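    Two of the runtime services enumerated above, index translation and communication schedule generation, can be illustrated with a toy block distribution: each global index is translated into an (owner, local offset) pair, and the off-processor references are grouped into per-owner message lists. The function names and the block layout are assumptions for this sketch, not the CHAOS interface.

```python
def owner_and_local(gidx, block):
    """Translate a global index into (owning processor, local offset)
    under a simple block distribution of `block` elements per processor."""
    return gidx // block, gidx % block


def build_schedule(my_rank, global_indices, block):
    """Group the off-processor references into per-owner message lists,
    i.e. a minimal communication schedule."""
    schedule = {}
    for g in global_indices:
        owner, local = owner_and_local(g, block)
        if owner != my_rank:              # local accesses need no message
            schedule.setdefault(owner, []).append(local)
    return schedule


# Two processors, block size 4: proc 0 owns globals 0-3, proc 1 owns 4-7.
# Processor 0 sweeps an indirection array referencing globals 1, 5, 6, 2.
sched = build_schedule(0, [1, 5, 6, 2], block=4)
# Only the references to globals 5 and 6 (proc 1's locals 1 and 2) require
# communication, so the schedule holds one message list, addressed to proc 1.
```

    In an inspector/executor setting, a schedule like this is built once in a preprocessing pass and then drives the actual gathers and scatters every time the loop executes.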

    Evaluation of Traditional Indian Antidiabetic Medicinal Plants for Human Pancreatic Amylase Inhibitory Effect In Vitro

    Pancreatic α-amylase inhibitors offer an effective strategy to lower the levels of postprandial hyperglycemia via control of starch breakdown. Eleven Ayurvedic Indian medicinal plants with known hypoglycemic properties were subjected to sequential solvent extraction and tested for α-amylase inhibition, in order to assess and evaluate their inhibitory potential on pancreatic α-amylase. Analysis of 91 extracts showed that 10 exhibited strong Human Pancreatic Amylase (HPA) inhibitory potential. Of these, 6 extracts showed concentration-dependent inhibition with IC50 values, namely, cold and hot water extracts from Ficus bengalensis bark (4.4 and 125 μg mL−1), Syzygium cumini seeds (42.1 and 4.1 μg mL−1), isopropanol extracts of Cinnamomum verum leaves (1.0 μg mL−1) and Curcuma longa rhizome (0.16 μg mL−1). The other 4 extracts exhibited concentration-independent inhibition, namely, methanol extract of Bixa orellana leaves (49 μg mL−1), isopropanol extract from Murraya koenigii leaves (127 μg mL−1), acetone extracts from C. longa rhizome (7.4 μg mL−1) and Tribulus terrestris seeds (511 μg mL−1). Thus, the probable mechanism of action of the above fractions is their inhibitory action on HPA, which reduces the rate of starch hydrolysis and thereby lowers glucose levels. Phytochemical analysis revealed the presence of alkaloids, proteins, tannins, cardiac glycosides, flavonoids, saponins and steroids as probable inhibitory compounds.

    Run-time and compile-time support for adaptive irregular problems

    In adaptive irregular problems the data arrays are accessed via indirection arrays, and data access patterns change during computation. Implementing such problems on distributed memory machines requires support for dynamic data partitioning, efficient preprocessing and fast data migration. This research presents efficient runtime primitives for such problems. This new set of primitives is part of the CHAOS library. It subsumes the previous PARTI library, which targeted only static irregular problems. To demonstrate the efficacy of the runtime support, two real adaptive irregular applications have been parallelized using CHAOS primitives: a molecular dynamics code (CHARMM) and a particle-in-cell code (DSMC). The paper also proposes extensions to Fortran D which allow compilers to generate more efficient code for adaptive problems. These language extensions have been implemented in the Syracuse Fortran 90D/HPF prototype compiler. The performance of the compiler-parallelized codes is compared with the hand-parallelized versions.
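    The adaptive case described above can be sketched minimally: the expensive preprocessing (inspector) result is cached and rerun only when the indirection array, and hence the access pattern, changes. All names here are illustrative stand-ins, not actual CHAOS primitives, and the "schedule" is a trivial placeholder for real inspector output.

```python
def make_schedule(indirection):
    """Stand-in for expensive inspector preprocessing: here it just
    computes the sorted set of referenced indices."""
    return sorted(set(indirection))


class AdaptiveInspector:
    """Reuses a preprocessing result across iterations, rebuilding it
    only when the indirection array changes."""

    def __init__(self):
        self._key = None        # snapshot of the last-seen access pattern
        self._schedule = None
        self.builds = 0         # how many times preprocessing actually ran

    def schedule_for(self, indirection):
        key = tuple(indirection)
        if key != self._key:                    # pattern changed: rebuild
            self._schedule = make_schedule(indirection)
            self._key = key
            self.builds += 1
        return self._schedule                   # otherwise: reuse


insp = AdaptiveInspector()
s1 = insp.schedule_for([3, 1, 3])   # first time step: build
s2 = insp.schedule_for([3, 1, 3])   # pattern unchanged: reuse, no rebuild
s3 = insp.schedule_for([2, 1])      # adaptive step changed the pattern: rebuild
```

    Amortizing the preprocessing this way is what distinguishes support for adaptive problems from the purely static case, where a single inspector pass suffices for the whole run.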

    Embedding Data Mappers with Distributed Memory Machine Compilers

    In scalable multiprocessor systems, high performance demands that computational load be balanced evenly among processors and that interprocessor communication be limited as much as possible. Compilation techniques for achieving these goals have been explored extensively in recent years [3, 9, 11, 13, 17, 18]. This research has produced a variety of useful techniques, but most of it has assumed that the programmer specifies the distribution of large data structures among processor memories. A few projects have attempted to automatically derive data distributions for regular problems [12, 10, 8, 1]. In this paper, we study the more challenging problem of automatically choosing data distributions for irregular problems.

    Supporting Irregular Distributions Using Data-Parallel Languages

    Languages such as Fortran D provide irregular distribution schemes that can efficiently support irregular problems. Irregular distributions can also be emulated in HPF. Compilers can incorporate runtime procedures to automatically support these distributions.
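    The emulation idea mentioned above (also described in the Fortran 90D/HPF abstract earlier in this list) can be sketched as applying a permutation to the data array so that an irregular mapping becomes a regular one, and renumbering the indirection array consistently so every access still sees the same value. The permutation below is a placeholder for one that a real partitioner would produce; the function name is an assumption for this sketch.

```python
def reorder_and_renumber(data, indirection, perm):
    """Reorder `data` by the permutation `perm` (perm[old] = new position)
    and renumber `indirection` to match, preserving all accessed values."""
    new_data = [None] * len(data)
    for old, new in enumerate(perm):
        new_data[new] = data[old]
    new_ind = [perm[i] for i in indirection]
    return new_data, new_ind


data = ['a', 'b', 'c', 'd']
ind = [2, 0, 3]               # irregular access pattern via indirection
perm = [1, 3, 0, 2]           # placeholder for a partitioner's output
nd, ni = reorder_and_renumber(data, ind, perm)
# Accesses through the renumbered indirection array yield the same values
# as the original accesses, so the emulated distribution is transparent.
```

    After this transformation, the reordered array can be given an ordinary regular (e.g. BLOCK) distribution while the computation behaves as if the data were irregularly distributed.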

    Run-time and Compile-time Support for Adaptive Irregular Problems

    In adaptive irregular problems the data arrays are accessed via indirection arrays, and data access patterns change during computation. Implementing such problems on distributed memory machines requires support for dynamic data partitioning, efficient preprocessing and fast data migration. This research presents efficient runtime primitives for such problems. This new set of primitives is part of the CHAOS library. It subsumes the previous PARTI library, which targeted only static irregular problems. To demonstrate the efficacy of the runtime support, two real adaptive irregular applications have been parallelized using CHAOS primitives: a molecular dynamics code (CHARMM) and a particle-in-cell code (DSMC). The paper also proposes extensions to Fortran D which allow compilers to generate more efficient code for adaptive problems. These language extensions have been implemented in the Syracuse Fortran 90D/HPF prototype compiler. The performance of the compiler-parallelized codes is compared with the hand-parallelized versions. (Also cross-referenced as UMIACS-TR-94-55.)

    A Manual for the CHAOS Runtime Library

    Procedures are presented that are designed to help users efficiently program irregular problems (e.g. unstructured mesh sweeps, sparse matrix codes, adaptive mesh partial differential equation solvers) on distributed memory machines. These procedures are also designed for use in compilers for distributed memory multiprocessors. The portable CHAOS procedures are designed to support dynamic data distributions and to automatically generate send and receive messages by capturing communication patterns at runtime. (Also cross-referenced as UMIACS-TR-95-34.)